
    Applying Andragogy Theory in Photoshop Training Programs

    Andragogy is a strategy for teaching adults that can be applied to Photoshop training. Photoshop workshops are attended largely by adult learners, so andragogical models of instruction would be extremely helpful for prospective trainers looking to improve their classroom designs. Adult learners differ considerably from child learners: adults tend to be far more intrinsically motivated to be in the classroom, typically taking on instruction because it advances a goal of their own, whereas children are in classrooms because of mandates. This study covers the six assumptions of andragogy and how they can be applied toward a successful Photoshop workshop. The goal of this study is to encourage future Photoshop trainers to begin incorporating andragogical principles into their classroom instruction. Keywords: Photoshop training programs, Adult learning, Andragogy

    Squamous cell carcinoma of the lip after allogeneic hemopoietic stem cell transplantation

    Allogeneic hemopoietic stem cell transplantation (HSCT) has been considered a curative treatment option for many hematological and non-hematological disorders. Despite the use of advanced methods of tissue typing and new therapies, graft-versus-host disease (GVHD) remains a major obstacle. Secondary malignancies are also among the most serious long-term complications after HSCT, including leukemia, lymphomas, and, to a lesser extent, solid tumors. The most commonly observed solid tumor is squamous cell carcinoma (SCC). We report two cases of SCC of the lower lip diagnosed several years after HSCT. Both cases were complicated by GVHD prior to the development of SCC and had a successful outcome with minimal surgical intervention.

    Bridged-U-Net-ASPP-EVO and Deep Learning Optimization for Brain Tumor Segmentation

    Brain tumor segmentation from Magnetic Resonance Images (MRI) is considered a major challenge due to the complexity of brain tumor tissues, and separating these tissues from healthy tissue is an even more tedious task when segmentation is performed manually by radiologists. In this paper, we present an experimental approach that emphasizes the impact and effectiveness of deep learning elements, such as optimizers and loss functions, in reaching an optimal deep learning solution for brain tumor segmentation. We evaluated our performance on the most popular brain tumor datasets (MICCAI BraTS 2020 and RSNA-ASNR-MICCAI BraTS 2021). Furthermore, a new Bridged U-Net-ASPP-EVO is introduced that exploits Atrous Spatial Pyramid Pooling to better capture multi-scale information for segmenting tumors of different sizes, Evolving Normalization layers, squeeze-and-excitation residual blocks, and max-average pooling for downsampling. Two variants of this architecture were constructed (Bridged U-Net_ASPP_EVO v1 and Bridged U-Net_ASPP_EVO v2). These two models achieved the best results when compared with other state-of-the-art models: average segmentation Dice scores of 0.84, 0.85, and 0.91 from v1, and 0.83, 0.86, and 0.92 from v2, for the Enhanced Tumor (ET), Tumor Core (TC), and Whole Tumor (WT) sub-regions, respectively, on the BraTS 2021 validation dataset.
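    The abstract above names Atrous Spatial Pyramid Pooling and max-average pooling as building blocks; the sketch below is a minimal, illustrative PyTorch version of those two components only. The dilation rates, channel counts, 3D convolutions, and instance normalization are assumptions made for illustration, not the paper's actual configuration.

```python
# Illustrative sketch only; hyperparameters are assumptions, not the paper's settings.
import torch
import torch.nn as nn


class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated convolutions capture
    multi-scale context, which helps when tumor sub-regions vary in size."""

    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False),
                nn.InstanceNorm3d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        # Fuse the parallel branches back to out_ch channels with a 1x1x1 convolution.
        self.project = nn.Conv3d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


class MaxAvgPool3d(nn.Module):
    """Max-average pooling for downsampling: blend max pooling (sharp responses)
    with average pooling (smooth context) instead of choosing one."""

    def forward(self, x):
        return 0.5 * (nn.functional.max_pool3d(x, 2) + nn.functional.avg_pool3d(x, 2))


if __name__ == "__main__":
    x = torch.randn(1, 32, 32, 32, 32)      # (batch, channels, D, H, W) toy MRI volume
    feats = ASPP(32, 32)(x)                  # spatial size preserved
    down = MaxAvgPool3d()(feats)             # spatial size halved
    print(feats.shape, down.shape)
```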

    Effect of Conventional and Electronic Cigarettes Smoking on the Color Stability and Translucency of Tooth Colored Restorative Materials: An In Vitro Analysis

    This in vitro study compared the effects of conventional and electronic cigarettes on the aesthetics (color stability and translucency) of two types of composite resin: micro-hybrid and nano-hybrid. Methods: A total of 120 specimens of two composite materials, Filtek Z250 XT (nano-hybrid, 3M) and Filtek Z250 (micro-hybrid, 3M), were divided into four groups (n = 30); shade A2 was used. The samples were exposed to conventional and electronic cigarette smoke in a custom-made chamber device. Color measurements were recorded with a spectrophotometer before and after exposure, and color and translucency were evaluated in the three-dimensional CIELAB color space. Results: There was a significant change in color (ΔE) and in the translucency parameter (TP) in all specimens exposed to electronic and conventional cigarettes. The highest mean ΔE, 1.74, was found for the nano-hybrid composite exposed to conventional cigarettes, versus 0.64 for the same material under electronic cigarettes, a significant difference (p < 0.05). The micro-hybrid composite showed smaller color changes under both exposures, with a mean ΔE of 0.85 under conventional cigarette smoke and 0.48 under electronic cigarette smoke (p < 0.004). Conclusions: Conventional cigarette smoke affects the color stability of composite resins more than electronic cigarette smoke. From a clinical point of view, the effect of smoke exposure on the tested specimens' color, for the exposure duration used, was moderate (ΔE < 2). The micro-hybrid composite showed better color stability than the nano-hybrid composite.
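    The study reports color change (ΔE) and a translucency parameter (TP) in CIELAB. The abstract does not say which ΔE formula was used, so the sketch below shows the common CIE76 form as a rough illustration; the numeric readings are placeholders, not data from the study.

```python
# Hedged sketch of standard CIELAB metrics (CIE76 form assumed); values are placeholders.
from math import sqrt


def delta_e(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) measurements."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))


def translucency_parameter(lab_black, lab_white):
    """TP: color difference of the same specimen measured over black vs. white backing."""
    return delta_e(lab_black, lab_white)


before = (75.2, 1.1, 18.0)   # placeholder L*a*b* reading before smoke exposure
after = (73.9, 1.4, 19.1)    # placeholder reading after exposure
print(round(delta_e(before, after), 2))
```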

    U-Net-Based Models towards Optimal MR Brain Image Segmentation

    Brain tumor segmentation from MRIs has always been a challenging task for radiologists; therefore, an automatic and generalized system to address this task is needed. Among the deep learning techniques used in medical imaging, U-Net-based variants are the models most frequently found in the literature for segmenting medical images across different modalities. The goal of this paper is therefore to examine the numerous advancements and innovations in the U-Net architecture, as well as recent trends, with the aim of highlighting the ongoing potential of U-Net to improve the performance of brain tumor segmentation. Furthermore, we provide a quantitative comparison of different U-Net architectures to highlight the performance and the evolution of this network from an optimization perspective. In addition, we experimented with four U-Net architectures (3D U-Net, Attention U-Net, R2 Attention U-Net, and a modified 3D U-Net) on the BraTS 2020 dataset for brain tumor segmentation to provide a better overview of this architecture's performance in terms of Dice score and 95% Hausdorff distance. Finally, we analyze the limitations and challenges of medical image analysis to provide a critical discussion of the importance of developing new architectures in terms of optimization.
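    Since the quantitative comparison hinges on the Dice score, here is a minimal sketch of how Dice is typically computed for binary segmentation masks; the smoothing term and the toy masks are illustrative choices, not the paper's evaluation pipeline.

```python
# Minimal Dice-score sketch; smoothing term and binary-mask setup are assumptions.
import numpy as np


def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


if __name__ == "__main__":
    pred = np.zeros((8, 8), dtype=bool)
    target = np.zeros((8, 8), dtype=bool)
    pred[2:6, 2:6] = True      # predicted 4x4 square
    target[3:7, 3:7] = True    # ground-truth 4x4 square, offset by one voxel
    print(round(dice_score(pred, target), 3))
```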

    Transformer architecture-based transfer learning for politeness prediction in conversation

    Politeness is an essential part of a conversation. As in spoken communication, politeness also matters in textual conversations and social media posts, so the automatic detection of politeness is a significant and relevant problem. The existing literature generally employs classical machine learning models, such as naive Bayes and support vector machine-based models, for politeness prediction. This paper exploits state-of-the-art (SOTA) transformer architectures and transfer learning for politeness prediction. The proposed model combines the strengths of context-incorporating large language models, a feed-forward neural network, and an attention mechanism for representation learning of natural language requests. The learned representation is then classified with a softmax function into polite, impolite, and neutral classes. We evaluate the presented model using two SOTA pre-trained large language models on two benchmark datasets. Our model outperformed the two SOTA and six baseline models, including two domain-specific transformer-based models, with both the BERT and RoBERTa language models. The ablation study shows that removing the feed-forward layer has the greatest impact on the presented model. The analysis also identifies batch size and the choice of optimization algorithm as parameters that noticeably affect model performance.
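    As a rough illustration of the transfer-learning setup the abstract describes (a pre-trained encoder, a feed-forward head, and a softmax over polite, impolite, and neutral classes), the sketch below uses bert-base-uncased via Hugging Face transformers. The encoder choice, head size, [CLS] pooling, and class ordering are assumptions for illustration, not the paper's exact architecture.

```python
# Illustrative transfer-learning classifier; architecture details are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class PolitenessClassifier(nn.Module):
    def __init__(self, encoder_name: str = "bert-base-uncased", num_classes: int = 3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Feed-forward head on top of the pooled [CLS] representation.
        self.head = nn.Sequential(nn.Linear(hidden, 256), nn.ReLU(), nn.Linear(256, num_classes))

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]          # [CLS] token embedding
        return torch.softmax(self.head(cls), dim=-1)


if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = PolitenessClassifier()
    batch = tok(["Could you please review this when you have a moment?"],
                return_tensors="pt", padding=True, truncation=True)
    probs = model(batch["input_ids"], batch["attention_mask"])
    # Probabilities over (polite, impolite, neutral); near-uniform here since the head is untrained.
    print(probs)
```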